Trying to fully understand Temperature Sensitive Storage Efficiency (TSSE) in Amazon FSx for NetApp ONTAP
Temperature Sensitive Storage Efficiency (TSSE) in Amazon FSx for NetApp ONTAP is just too hard
Hello, this is non-pi (@non____97).
Have you ever felt that Temperature Sensitive Storage Efficiency (hereafter TSSE) in Amazon FSx for NetApp ONTAP (hereafter FSxN) is just too hard to understand? I have.
TSSE covers ONTAP's data-reduction features such as deduplication, compression, and compaction.
What makes TSSE hard to understand is that neither the NetApp nor the AWS official documentation explains its behavior comprehensively.
SB C&S has published an explanation of TSSE as of ONTAP 9.8. However, TSSE on FSxN behaves differently from generic TSSE, and I was also curious about the differences from ONTAP 9.13.1, the version available on FSxN as of 2023/12/11.
So, in order to fully understand TSSE on FSxN, I will organize everything I have been able to learn.
Summary up front
- TSSE is a mechanism that changes the compression level between hot data and cold data
- Whether data is hot or cold is determined by the time since it was last accessed or updated
  - By default, blocks untouched for 14 days or more are judged cold
- Inline compression for hot data uses 8KB compression, and data is left uncompressed unless compression yields at least 50% savings
- Inactive data compression for cold data uses 32KB compression, and data is left uncompressed unless compression yields at least 25% savings
  - Although it is 32KB compression, it is not the same thing as traditional Secondary Compression
- Compaction and TSSE compression savings are calculated at the aggregate level, not the volume level
  - How much data was saved can be checked with aggr show-efficiency
    - However, due to an FSxN limitation, aggr show-efficiency cannot break the savings down into how much came from compaction versus compression
  - As an exception, the amount saved by compression can be checked under Auto Adaptive Compression in volume show-footprint
- Inactive data compression is TSSE's post-process compression
- TSSE compression is treated as Auto Adaptive Compression
  - It is not Volume Compression
- TSSE compresses per WAFL container (i.e., per data block)
- TSSE runs inline data-reduction processing in the following order:
  - Inline zero-block deduplication
  - Inline deduplication
  - Inline compression
  - Inline compaction
- TSSE runs post-process data reduction under the following conditions:
  - Post-process deduplication: when the change log exceeds a threshold (default 20%)
  - Post-process compression: when a data block passes the configured threshold (default 14 days) and is judged cold
- The change log is the accumulated fingerprints generated per ONTAP data block (4KB)
  - Each data block's fingerprint is 32 bytes
  - The absolute amount of change log that triggers Storage Efficiency therefore varies with volume size
- TSSE is enabled by default on FSxN
  - To confirm that Storage Efficiency is running as TSSE, check that the Storage Efficiency Mode is efficient
- However, the following data-reduction features are disabled:
  - Cross-volume inline deduplication
  - Cross-volume background deduplication
  - Post-process compression
- The following cannot be enabled:
  - Cross-volume inline deduplication
  - Cross-volume background deduplication
- Even with Inactive data compression enabled, Compression in volume efficiency show does not become enabled
  - As a consequence, volume efficiency start, which would normally run both deduplication and compression, runs only deduplication
  - To run post-process compression manually, you must run volume efficiency inactive-data-compression start separately
- If you want post-process compression, you must enable Inactive data compression
- Post-process deduplication can deduplicate against data blocks already scanned in previous post-process deduplication runs
- Running volume efficiency start -scan-old-data or volume efficiency inactive-data-compression start makes disk throughput, disk IOPS, and CPU utilization spike sharply
  - Be careful: this is very likely to have a significant performance impact
- As of 2023/12/5, on FSxN running ONTAP 9.13.1P5, the inactive-data-compression execution interval is fixed at 24 hours
What is TSSE
Overview of TSSE
As the name Temperature Sensitive Storage Efficiency suggests, TSSE is a mechanism that changes the compression level based on the temperature of the data.
Frequently accessed data is treated as hot data, and infrequently accessed data as cold data. By varying the compression block length between hot and cold data, TSSE achieves efficient compression.
ONTAP assesses how frequently a volume's data is accessed and maps that frequency to the compression level applied to the data, providing temperature-sensitive storage efficiency benefits. For cold data that is accessed infrequently, larger data blocks are compressed; for hot data that is accessed and overwritten frequently, smaller data blocks are compressed, making the process more efficient.
Concretely, compression works as follows:
- Hot data: compressed in 8KB units
- Cold data: compressed in 32KB units
You can picture it as hot data getting Adaptive Compression and cold data getting Secondary Compression.
The decision of whether to compress at all also matches Adaptive Compression and Secondary Compression.
Hot data is not compressed unless compression yields at least 50% savings, and cold data unless it yields at least 25%. By skipping compression that would have little effect, ONTAP keeps its own load down.
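To make those thresholds concrete, here is a minimal sketch in Python. This is illustrative only, not ONTAP code: the function name and the tier dictionaries are made up, while the group sizes and savings thresholds come from the article.

```python
# Illustrative sketch (not ONTAP code) of TSSE's documented "worth compressing?"
# decision. Only the sizes and thresholds are from the article; the rest is
# hypothetical.

HOT = {"group_kib": 8, "min_savings": 0.50}    # inline (Adaptive-style) compression
COLD = {"group_kib": 32, "min_savings": 0.25}  # inactive data compression

def should_store_compressed(original_bytes: int, compressed_bytes: int, tier: dict) -> bool:
    """Store the compressed form only if savings meet the tier's threshold."""
    savings = 1 - compressed_bytes / original_bytes
    return savings >= tier["min_savings"]

# A hot 8KiB group compressed to 5KiB saves 37.5%: below 50%, so kept uncompressed
print(should_store_compressed(8192, 5120, HOT))    # False
# A cold 32KiB group compressed to 20KiB saves 37.5%: above 25%, so stored compressed
print(should_store_compressed(32768, 20480, COLD)) # True
```

The same 37.5% savings is rejected for hot data but accepted for cold data, which is exactly why recompressing cold data in larger groups pays off.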
Whether data is hot or cold is determined by the time since it was last accessed or updated. By default, blocks that have gone 14 or more days are judged cold. This is configurable between 0 and 60 days, via volume efficiency inactive-data-compression modify.
[-threshold-days] - Inactive data compression scan threshold days value
    Threshold days value for inactive data compression scan.
[-threshold-days-min] - Inactive data compression scan threshold minimum allowed value.
    Minimum allowed value in threshold days for inactive data compression scan.
[-threshold-days-max] - Inactive data compression scan threshold maximum allowed value.
    Maximum allowed value in threshold days for inactive data compression scan.
Traditionally, post-process compression compressed the data blocks that inline compression had not compressed. TSSE's compression, by contrast, takes data blocks that were already inline-compressed and recompresses them in the background as a post-process.
Thanks to this recompression, the overall compression ratio improves, letting you store more data in the same amount of physical space.
Note that TSSE is supported only on the FabricPool local tier (the primary storage, in FSx for ONTAP terms). Be aware that no additional deduplication or compression can be applied to data blocks already on capacity pool storage.
Temperature-sensitive storage efficiency
Beginning in ONTAP 9.8, temperature-sensitive storage efficiency (TSSE) is available. TSSE uses temperature scans to determine how hot or cold data is and compresses larger or smaller blocks of data accordingly — making storage efficiency more efficient
Beginning in ONTAP 9.10.1, TSSE is supported on volumes located on FabricPool-enabled local tiers (storage aggregates). TSSE compression-based storage efficiencies are preserved when tiering to cloud tiers. Although more efficient, smaller blocks will require smaller GETs, reducing GET performance from the cloud tier.
TSSE processing flow
From the description above, you might think that only the compression method changes.
In fact, the order in which compression, deduplication, and so on are applied also differs, as does the unit of compression.
Traditional compression operated per file. With TSSE, it operates per WAFL container (per data block).
If you are curious about WAFL internals, see the papers and patents below.
Also, inline data-reduction processing used to run in the following order:
- Inline zero-block deduplication
- Inline compression
- Inline deduplication
- Inline compaction
With TSSE, the order of deduplication and compression is swapped:
- Inline zero-block deduplication
- Inline deduplication
- Inline compression
- Inline compaction
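As a toy illustration of the pipeline order above, the four inline stages can be sketched like this. Every stage here is a stand-in (zlib instead of lzopro, naive dedupe and packing); real WAFL internals are far more involved.

```python
# Toy model of TSSE's inline pipeline order: zero-block dedupe -> dedupe ->
# compression -> compaction. Stand-in implementations, for illustration only.
import zlib

def inline_pipeline(blocks: list[bytes]) -> list[bytes]:
    # 1. Inline zero-block deduplication: all-zero blocks need no storage
    blocks = [b for b in blocks if b != b"\x00" * len(b)]
    # 2. Inline deduplication: identical blocks are stored once
    seen, unique = set(), []
    for b in blocks:
        if b not in seen:
            seen.add(b)
            unique.append(b)
    # 3. Inline compression (zlib standing in for lzopro); keep the smaller form
    compressed = [min(b, zlib.compress(b), key=len) for b in unique]
    # 4. Inline compaction: pack sub-4KiB results together into 4KiB containers
    packed, cur = [], b""
    for c in compressed:
        if len(cur) + len(c) > 4096:
            packed.append(cur)
            cur = b""
        cur += c
    if cur:
        packed.append(cur)
    return packed

# One zero block, two identical blocks, one distinct block -> everything
# that survives fits into a single packed 4KiB container.
print(len(inline_pipeline([b"\x00" * 4096, b"a" * 4096, b"a" * 4096, b"b" * 4096])))
```

Because compression runs before compaction in this order, stage 4 receives partially filled results it can pack together, which is the point the next paragraph makes.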
Since compaction now runs after compression, I believe the 4KB physical data blocks left partially filled by compression have become easier to pack together.
Excerpt: ONTAP の Storage Efficiency 機能
Post-process deduplication and compression work the same as before:
- Post-process deduplication: when the Change Log exceeds a threshold (default 20%)
- Post-process compression: when the number of days since a data block was last read or written exceeds the threshold (default 14 days) and the block is judged cold
Also, compaction runs after post-process compression completes, packing together the data blocks that compression has shrunk.
Execution of inactive data compression increases data compaction savings as well
Applies to
Amazon FSx for Netapp ONTAP
Answer
- Data Compaction increases because compression savings are considered in data compaction savings.
- Compaction runs followed by compression.
Additional Information
Output reference :
- Before executing inactive data compression :
FsxId*> aggr show -fields data-compaction-space-saved, sis-space-saved
aggregate data-compaction-space-saved sis-space-saved
--------- --------------------------- ---------------
aggr1     1.51GB                      1.51GB
- After executing inactive data compression :
FsxId*> volume efficiency inactive-data-compression start -volume vol2 -inactive-days 0

FsxId*> aggr show -fields data-compaction-space-saved, sis-space-saved
aggregate data-compaction-space-saved sis-space-saved
--------- --------------------------- ---------------
aggr1     1.87GB                      1.87GB
The Change Log used by post-process deduplication holds the fingerprints of new blocks written to the volume.
Deduplication runs on the active file system. Therefore, as additional data is written to the deduplicated volume, fingerprints are created for each new block and written to a change log file. For subsequent deduplication operations, the change log is sorted and merged with the fingerprint file, and the deduplication operation continues with fingerprint comparisons as previously described.
According to the following KB, each Change Log fingerprint is 32 bytes:
- The changelog records modifications to data blocks. Once deduplication is started, Data ONTAP refers to the changelog to deduplicate the data, and clears the changelog when the deduplication process is complete.
- Changelog can hold records of modifications of blocks up to a maximum size of 8TB. This maximum size cannot be changed.
- A changelog size of 64GB means that changelogging will continue new writes till 8 TB worth of data is written to a volume.
8TB user data = 2147483648 data blocks in a volume (4k per block size)
Fingerprint size = 32bytes
Thus, the total changelog size supported = 2147483648 * 32bytes = 64GB
How does the deduplication changelog work? - NetApp Knowledge Base
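The KB's arithmetic is easy to sanity-check:

```python
# Sanity check of the KB's numbers: 8TiB of tracked data at 4KiB per block,
# 32 bytes of fingerprint per block, comes out to a 64GiB changelog.
TIB = 1024**4
data_blocks = 8 * TIB // (4 * 1024)
changelog_bytes = data_blocks * 32
print(data_blocks)                  # 2147483648
print(changelog_bytes // 1024**3)   # 64 (GiB)
```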
The fact that the default trigger threshold for post-process deduplication is 20% is also stated in the NetApp official documentation:
You can modify the efficiency operation schedule to run deduplication or data compression when the number of new blocks written to the volume after the previous efficiency operation (performed manually or scheduled) exceeds a specified threshold percentage.
About this task
If the schedule option is set to auto, the scheduled efficiency operation runs when the amount of new data exceeds the specified percentage. The default threshold value is 20 percent. This threshold value is the percentage of the total number of blocks already processed by the efficiency operation.
Run efficiency operations depending on the amount of new data written
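The auto-schedule rule quoted above can be written down as a one-liner. This is a sketch, not ONTAP's implementation; the function and variable names are my own.

```python
# Sketch of the auto-schedule trigger: the scheduled efficiency operation runs
# when new data exceeds the threshold percentage (default 20%) of the blocks
# already processed by the previous efficiency operation. Names are illustrative.
def should_trigger(new_blocks: int, processed_blocks: int, threshold: float = 0.20) -> bool:
    return processed_blocks > 0 and new_blocks / processed_blocks > threshold

print(should_trigger(150_000, 1_000_000))  # False (new data is 15% of processed)
print(should_trigger(250_000, 1_000_000))  # True  (25% exceeds the 20% default)
```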
On regular ONTAP, the interval at which post-process compression runs is freely configurable. On FSxN, however, it cannot be changed from the 24-hour interval.
The overall processing flow is summarized below.
How to configure TSSE
The following KB shows that for a volume to use TSSE, its Storage Efficiency Mode must be efficient.
- ONTAP 9.8 introduced Temperature Sensitive Storage Efficiency (TSSE) on newly created volumes on AFF platforms.
- Existing volumes created prior to upgrading to ONTAP 9.8 or 9.9 did not have TSSE enabled
- Starting in ONTAP 9.10.1, TSSE can be enabled on existing volumes
- The volume efficiency modify command includes the storage-efficiency-mode parameter to set as default or efficient
- Refer to Temperature-sensitive storage efficiency overview for the difference between these two options
- If desired to set TSSE enabled for specific volumes, use the volume efficiency modify command with storage-efficiency-mode set to efficient
- Following values are possible for storage-efficiency-mode during volume efficiency modify:
- default - Volume is having file based compression scheme
- efficient - Volume is having TSSE based compression scheme
- Setting "Storage-efficiency-mode" to "efficient" is same as enabling TSSE using "volume efficiency modify" ("application-io-size" will be set as "auto") and also all forms of deduplication will be enabled.
- Setting "Storage-efficiency-mode" to "default" is same as enabling file based compression ("application-io-size" will be set as "8k").
How do I enable TSSE on volumes created prior to 9.8 - NetApp Knowledge Base
So setting the mode to efficient automatically sets the application IO size (application-io-size) to auto. With the application IO size at auto, ONTAP switches between 8KB and 32KB compression automatically depending on when the compression runs; because of this, it is also called Auto Adaptive Compression.
Note that, as tried in the article below, attempting to enable post-process compression (Compression) does not work. This is simply how FSxN behaves, and there is no way around it.
Also, all forms of deduplication are supposed to be enabled. On FSxN, however, as with post-process compression, cross-volume inline deduplication and cross-volume background deduplication cannot be enabled.
Checking the configuration when Storage Efficiency is enabled on FSxN
Configuration of a volume with Storage Efficiency disabled
Let's verify TSSE's behavior with some actual testing.
First, let's check the configuration when Storage Efficiency is enabled on FSxN.
To start, I created a volume from the management console with Storage Efficiency disabled.
The created volume's Storage Efficiency and Inactive data compression settings are as follows.
::> set diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y ::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Disabled Auto State: - Status: Idle Progress: Idle for 00:12:32 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 06:15:01 2023 Last Success Operation End: Thu Dec 07 06:15:01 2023 Last Operation Begin: Thu Dec 07 06:15:01 2023 Last Operation End: Thu Dec 07 06:15:01 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 308KB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: false Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale 
Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: false Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: false auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: fsx Is Enabled: false Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
Both are disabled.
Since the Storage Efficiency Mode is efficient, this is indeed TSSE, but Inline Dedupe, Data Compaction, and various other features are disabled.
Enabling Storage Efficiency
Now let's enable Storage Efficiency from the management console.
After enabling it, let's check the management activity audit log to see what changes were made.
::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Thu Dec 7 06:30:00 2023"
timestamp                  node                      application vserver                username          input                                                                              state   message
-------------------------- ------------------------- ----------- ---------------------- ----------------- ---------------------------------------------------------------------------------- ------- -------
"Thu Dec 07 06:30:35 2023" FsxId04076278992c2097a-01 http        FsxId04076278992c2097a admin             GET /api/private/cli/storage/failover?fields=node,possible,reason                  Success -
"Thu Dec 07 06:30:36 2023" FsxId04076278992c2097a-01 http        FsxId04076278992c2097a admin             GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid Success -
"Thu Dec 07 06:33:47 2023" FsxId04076278992c2097a-01 http        FsxId04076278992c2097a fsx-control-plane PATCH /api/storage/volumes/f3de94f5-94c7-11ee-a274-2d5895636ad1 : {"size":68719476736,"nas":{},"efficiency":{"compression":"inline","compaction":"inline","dedupe":"both","cross_volume_dedupe":"none"}} Success -
3 entries were displayed.
The following appear to have been enabled:
- Inline compression
- Inline compaction
- Inline deduplication
- Post-process deduplication
Cross-volume deduplication, on the other hand, is explicitly disabled.
The volume's Storage Efficiency and Inactive data compression settings after the change are as follows.
::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:22:35 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 06:15:01 2023 Last Success Operation End: Thu Dec 07 06:15:01 2023 Last Operation Begin: Thu Dec 07 06:15:01 2023 Last Operation End: Thu Dec 07 06:15:01 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 308KB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip 
Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: fsx Is Enabled: false Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
Storage Efficiency is now enabled, and the items specified in the management activity audit log have been enabled as well.
Inactive data compression remained disabled.
Enabling post-process compression and cross-volume deduplication
Enabling post-process compression
Let's double-check whether post-process compression and cross-volume deduplication really cannot be enabled.
Looking at the suggestions for volume efficiency modify, the -compression, -cross-volume-inline-dedupe, and -cross-volume-background-dedupe options exist, so the commands can at least be issued.
::*> volume efficiency modify -volume vol1 ?
  [ -vserver <vserver name> ]                            Vserver Name (default: fsx)
  { [ -schedule <text> ]                                 Schedule
  | [ -policy <text (size 1..32)> ] }                    Efficiency Policy Name
  [ -optimize {performance|space-saving} ]               *Optimization
  [ -min-blocks-shared {1..64} ]                         *Min Blocks Shared
  { [ -compression {true|false} ]                        Compression
    [ -inline-compression {true|false} ]                 Inline Compression
    [ -application-io-size <Application I/O Size> ]      *Application IO Size
    [ -compression-type {none|secondary|adaptive} ]      *Compression Type
  | [ -storage-efficiency-mode {default|efficient} ] }   Storage Efficiency Mode
  [ -verify-trigger-rate {1..300} ]                      *Verify Trigger Rate
  [ -inline-dedupe {true|false} ]                        Inline Dedupe
  [ -data-compaction {true|false} ]                      Data Compaction
  [ -cross-volume-inline-dedupe {true|false} ]           Cross Volume Inline Deduplication
  [ -compression-algorithm {lzopro|gzip|lzrw1a|zstd} ]   *Compression Algorithm
  [ -cross-volume-background-dedupe {true|false} ]       Cross Volume Background Deduplication
First, let's try post-process compression.
::*> volume efficiency modify -volume vol1 -compression true

::*> volume efficiency show -volume vol1 -fields compression
vserver volume compression
------- ------ -----------
fsx     vol1   false
The command was accepted without error, but the value did not change.
Enabling cross-volume deduplication
Next, cross-volume deduplication.
::*> volume efficiency modify -volume vol1 -cross-volume-inline-dedupe true -cross-volume-background-dedupe true

Error: command failed: Failed to modify efficiency configuration for volume "vol1" of Vserver "fsx": Cross volume deduplication is supported only on volumes that are owned by nodes that are All-Flash optimized personality enabled.
This one failed with an error.
The KB below suggests running system node run -node * -command "options sis.idedup_allow_non_aff_hya on", but as of 2023/12/8 FSxN does not allow the node run command, so there appears to be no way to enable cross-volume deduplication.
Checking post-process deduplication
Creating test files
Next, let's check post-process deduplication.
Specifically, whether post-process deduplication starts running once the change log reaches 20%.
I mounted the FSxN volume and copied /usr onto it.
$ sudo mount -t nfs4 svm-034fa6e069121b3d1.fs-04076278992c2097a.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-034fa6e069121b3d1.fs-04076278992c2097a.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  320K   61G   1% /mnt/fsxn/vol1

$ sudo cp -pr /usr /mnt/fsxn/vol1/usr1
Checking the Storage Efficiency state, it had not run yet. The change log has grown to 9.49MB, but its usage is still only 1%.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:42:17 1%              9.49MB         1.21GB
As introduced at the start of this article, the fingerprints used for deduplication are 32 bytes each, and a fingerprint is created for every 4KB ONTAP data block.
With 1.21GB of logical data written, the number of data blocks is (1.21 GB × 1,024 × 1,024) / 4KB = 317,194.24. At 32 bytes per data block, that is 317,194.24 data blocks × 32 bytes / 1,024 / 1,024 = 9.68 MiB, which roughly matches the reported change log size.
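The same estimate in code, assuming the 4KiB block size and 32-byte fingerprints described above:

```python
# Estimate the change log size for 1.21GiB of logical data:
# one 32-byte fingerprint per 4KiB data block.
logical_bytes = 1.21 * 1024**3
data_blocks = logical_bytes / 4096
changelog_mib = data_blocks * 32 / 1024**2
print(round(data_blocks, 2))    # 317194.24
print(round(changelog_mib, 2))  # 9.68
```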
Let's add some more test files.
$ sudo cp -pr /usr /mnt/fsxn/vol1/usr2
Checking the Storage Efficiency state again, the change log had grown to 16.17MB.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:47:13 2%              16.17MB        2.56GB
The change log grew by much less than the logical data size did; perhaps inline deduplication is the reason.
Checking, deduplication had indeed saved 971MB of data.
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 64GB 0 58.78GB 64GB 60.80GB 2.01GB 3% 971.0MB 32% 971.0MB 32% 642.0MB 0B 0% 2.96GB 5% - 2.96GB 0B 0%
From this behavior we can tell the following:
- Even when inline deduplication runs, it is not reflected in the Storage Efficiency Progress
- The change log size is calculated from the physical data size after inline deduplication, inline compression, and the other data-reduction processing have run
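A rough check of that hypothesis, using the figures above (this is my own back-of-the-envelope calculation, not something the CLI reports):

```python
# If the change log tracked all 2.56GiB of logical data, it would be larger
# than the 16.17MB actually observed; the gap is consistent with inline
# data reduction shrinking the physical block count being fingerprinted.
expected_mib = (2.56 * 1024**3 / 4096) * 32 / 1024**2
observed_mib = 16.17
print(round(expected_mib, 2))                 # 20.48
print(round(expected_mib - observed_mib, 2))  # 4.31 (MiB "missing" from the log)
```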
Checking whether volume size affects change log usage
It then occurred to me to wonder whether the volume size affects the change log usage percentage.
If shrinking the volume shrinks the absolute change log threshold, then the same amount of written data should trigger post-process deduplication more often.
As a test, let's resize the volume to 32GB.
::*> volume modify -volume vol1 -size 32GB Volume modify successful on volume vol1 of Vserver fsx. ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 32GB 0 28.38GB 32GB 30.40GB 2.01GB 6% 971.0MB 32% 971.0MB 32% 642.0MB 0B 0% 2.96GB 10% - 2.96GB 0B 0%
Checking Storage Efficiency, the change log usage had gone from 2% to 5%.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:56:48 5%              18.81MB        2.96GB
Let's shrink it further, to 16GB.
::*> volume modify -volume vol1 -size 16GB Volume modify successful on volume vol1 of Vserver fsx. ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 13.18GB 16GB 15.20GB 2.01GB 13% 971.0MB 32% 971.0MB 32% 642.0MB 0B 0% 2.96GB 19% - 2.96GB 0B 0%
Checking Storage Efficiency, the change log usage had gone from 5% to 11%.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:59:00 11%             18.81MB        2.96GB
So with a smaller volume, post-process deduplication should kick in after a smaller amount of updates.
Conversely, if you oversize a volume, you may end up in a situation where "we write plenty of updates, yet deduplication never seems to kick in."
Incidentally, since the volume size is 16GB, I figure the maximum change log size is (16GB × 1,024 × 1,024 / 4KB) × 32 Byte / 1,024 / 1,024 = 128MB. 18.81MB / 128MB ≒ 0.147, so despite some margin of error it roughly lines up.
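That capacity check in code, again assuming one 32-byte fingerprint per 4KiB block:

```python
# Maximum change log for a 16GiB volume: one 32-byte fingerprint per 4KiB block.
GIB = 1024**3
max_changelog_mib = (16 * GIB // 4096) * 32 / 1024**2
print(max_changelog_mib)                    # 128.0
print(round(18.81 / max_changelog_mib, 3))  # 0.147
```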
Confirming that post-process deduplication runs automatically
Let's add more files and confirm that post-process deduplication runs automatically.
$ sudo cp -pr /usr /mnt/fsxn/vol1/usr3
Then, post-process deduplication started running even though the change log usage had not exceeded 20%.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- -------------------- --------------- -------------- ----------------- fsx vol1 Enabled 224700 KB (28%) Done 0% 20.11MB 3.73GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- -------------------- --------------- -------------- ----------------- fsx vol1 Enabled 417192 KB (53%) Done 1% 20.57MB 3.77GB ::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Active Progress: 696992 KB (89%) Done Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 06:15:01 2023 Last Success Operation End: Thu Dec 07 06:15:01 2023 Last Operation Begin: Thu Dec 07 06:15:01 2023 Last Operation End: Thu Dec 07 06:15:01 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: Once approxmiately every 51 min(s) and 33 sec(s) Changelog Usage: 1% Changelog Size: 20.81MB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 4.01GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Saving Checkpoint Time: Thu Dec 7 07:23:51 UTC 2023 Checkpoint Operation Type: Start Checkpoint Stage: Saving_sharing Checkpoint Substage: - Checkpoint Progress: 0 KB (0%) Done Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 494841 Duplicate 
Blocks Found: 194771 Sorting Begin: Thu Dec 7 07:23:50 UTC 2023 Blocks Deduplicated: 174101 Blocks Snapshot Crunched: 0 De-duping Begin: Thu Dec 7 07:23:51 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 494841 Same FP Count: 194771 Same FBN: 0 Same Data: 174101 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:00:28 1% 2.46MB 4.44GB
実際にポストプロセス重複排除がされているか確認します。
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 13.57GB 16GB 15.20GB 1.63GB 10% 2.81GB 63% 2.81GB 63% 1.17GB 0B 0% 4.44GB 29% - 4.44GB 0B 0% ::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:02:31 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 07:23:50 2023 Last Success Operation End: Thu Dec 07 07:25:42 2023 Last Operation Begin: Thu Dec 07 07:23:50 2023 Last Operation End: Thu Dec 07 07:25:42 2023 Last Operation Size: 1.89GB Last Operation Error: - Operation Frequency: 
Once approxmiately every 54 min(s) and 26 sec(s) Changelog Usage: 1% Changelog Size: 2.64MB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 4.44GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 494841 Duplicate Blocks Found: 194771 Sorting Begin: Thu Dec 7 07:23:50 UTC 2023 Blocks Deduplicated: 194769 Blocks Snapshot Crunched: 0 De-duping Begin: Thu Dec 7 07:23:51 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 494841 Same FP Count: 194771 Same FBN: 0 Same Data: 194769 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 
Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
1.89GB処理されたようですね。
実際にポストプロセス重複排除が実行される前後で重複排除による削減量を比較すると、971MBから2.81GBに増えています。
また、ポストプロセス重複排除が実行されると、Change logの使用量がリセットされることが分かります。
ここからも、差分ブロックが発生して閾値に達するたびにポストプロセス重複排除が動作する仕組みであることが分かります。
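まとめに記載したとおり、Change logはONTAPのデータブロック(4KB)ごとに32ByteのFoot printとして生成されます。このブロックサイズとFoot printサイズを前提(あくまで本記事で整理した値に基づく仮定)として、書き込み量からChange logの増加量を概算するシェルのスケッチを置いておきます。

```shell
#!/bin/sh
# 書き込みサイズからChange logの増加量を概算するスケッチ
# 前提(仮定): 1データブロック = 4KiB、1ブロックあたりのFoot print = 32Byte
WRITE_MB=2048                                  # 書き込むデータ量 (MiB)
BLOCKS=$(( WRITE_MB * 1024 * 1024 / 4096 ))    # 4KiBブロック数
CHANGELOG_KB=$(( BLOCKS * 32 / 1024 ))         # Change log増加量 (KiB)
echo "${BLOCKS} blocks -> changelog +${CHANGELOG_KB} KiB"
```

2GiB書き込むとChange logは16MiB増える計算です。インライン重複排除で削減されたブロックはChange logに乗らないため、実際の増加量はこれより小さくなるはずです。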
ポストプロセス重複排除が実行されるタイミングを探る
Change logの使用率が20%未満の時点でポストプロセス重複排除が実行されたのが気になります。
もう一度ファイルを追加して、どのタイミングでポストプロセス重複排除が動作するのか確認します。
$ sudo cp -pr /usr /mnt/fsxn/vol1/usr4
ファイル追加後のChange logを確認すると、あまり増えていません。インライン重複排除がしっかり効いていそうです。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:10:46 2%              4.22MB         5.92GB
実際に重複排除量を確認すると、2.81GBから4.07GBへと、書き込んだデータ量ぶんそのまま増えていました。
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 13.36GB 16GB 15.20GB 1.84GB 12% 4.07GB 69% 4.07GB 69% 1.17GB 0B 0% 5.92GB 39% - 5.92GB 0B 0%
時間を置いて書き込んだデータに対してもインライン重複排除が効くということは、インライン重複排除はデータの書き込みストリーム内で重複しているデータブロックを削減するのではなく、メモリ上のバッファキャッシュに残っているデータブロックと比較して削減していそうですね。
インライン重複排除されないように、`/dev/urandom`を使ってランダムな値を持つ1GiBのファイルを作成します。
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_1 bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.51581 s, 165 MB/s
Change logの使用率を確認すると、2%から8%に増えていました。良い感じです。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:26:19 8%              14.24MB        6.93GB
試しに先ほど作成したファイルのコピーを作成します。
$ sudo cp /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_1_copy
コピー後のChange logの使用率を確認すると、特に変動ありませんでした。インライン重複排除恐るべしです。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:28:54 8%              14.30MB        7.94GB
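参考までに、「同一内容のファイルはブロック単位のフィンガープリントが一致するため、重複排除の候補として検出できる」という動きをローカルで模擬する簡単なスケッチです。ここではSHA-256を使っていますが、ONTAP内部のフィンガープリント計算方法そのものではなく、あくまでイメージです(GNUの`split`/`sha256sum`があるLinux環境を仮定しています)。

```shell
#!/bin/sh
# 4KiBブロック単位のフィンガープリントが一致するブロックは共有できる、
# という重複排除の考え方をローカルで模擬するスケッチ
set -eu
f=$(mktemp)
head -c $(( 4096 * 4 )) /dev/urandom > "$f"   # 4KiB x 4ブロックのランダムデータ
cat "$f" "$f" > "${f}.x2"                     # 同一データを2回並べる(cpによるコピー相当)
total=$(( $(wc -c < "${f}.x2") / 4096 ))      # 総ブロック数
# 4KiBごとに分割してハッシュし、ユニークなブロック数を数える
split -b 4096 "${f}.x2" "${f}.blk."
unique=$(sha256sum "${f}.blk."* | awk '{print $1}' | sort -u | wc -l | tr -d ' ')
echo "total=${total} unique=${unique}"        # 8ブロック中ユニークは4ブロック
rm -f "$f" "${f}".*
```

コピーで増えた8ブロックのうちユニークなのは4ブロックだけなので、フィンガープリントの突き合わせだけで残り半分を書き込まずに済む、というのがインライン重複排除のイメージです。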
新たにランダムな値を持つ1GiBのファイルを作成します。
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_2 bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.57326 s, 163 MB/s
コマンドを叩いて即座にChange logを確認すると、以下のようにポストプロセス重複排除が実行されていました。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- ------------- --------------- -------------- ----------------- fsx vol1 Enabled 0 KB Searched 0% 17.02MB 8.47GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- ------------- --------------- -------------- ----------------- fsx vol1 Enabled 0 KB Searched 3% 22.88MB 8.95GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- -------------------- --------------- -------------- ----------------- fsx vol1 Enabled 130344 KB (28%) Done 4% 24.29MB 8.95GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- -------------------- --------------- -------------- ----------------- fsx vol1 Enabled 348284 KB (76%) Done 4% 24.29MB 8.95GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:00:32 4% 7.58MB 8.97GB
ポストプロセス重複排除の閾値であるChange logの使用率が20%の場合、Change logのサイズは 128MB × 0.2 = 25.6MB です。
ただし、Change logが17.02MBの時点でスキャンを開始しています。
もしかすると、書き込まれる量を事前に計算して、Change logの使用率が20%を超えることが見込まれる場合にポストプロセス重複排除の実行を開始するのかもしれません。
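ここで、Change logの総量を128MiB、閾値を20%、1ブロック(4KiB)あたりのFoot printを32Byteと仮定した場合に、ポストプロセス重複排除が走るまでに受け付けられる新規データ量を概算してみます(いずれも本文で確認した値に基づく仮定です)。

```shell
#!/bin/sh
# ポストプロセス重複排除が走るまでに書き込める新規データ量を概算するスケッチ
# 前提(仮定): Change log全体 = 128MiB、閾値 = 20%、Foot print = 32Byte/4KiBブロック
CHANGELOG_BYTES=$(( 128 * 1024 * 1024 ))          # Change log全体
THRESHOLD_BYTES=$(( CHANGELOG_BYTES * 20 / 100 )) # 閾値20%ぶんのChange log (約25.6MiB)
BLOCKS=$(( THRESHOLD_BYTES / 32 ))                # 閾値に相当する4KiBブロック数
DATA_MIB=$(( BLOCKS * 4096 / 1024 / 1024 ))       # 対応する新規データ量 (MiB)
echo "threshold: ~$(( THRESHOLD_BYTES / 1024 / 1024 )) MiB, new data: ~${DATA_MIB} MiB"
```

つまり、重複排除も圧縮も効かない新規データであれば、3.2GiBほど書き込むと閾値に達する計算になります。今回のように1GiB程度の書き込みで発火しているのは、この計算だけでは説明がつかない点です。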
ポストプロセス重複排除実行後のボリュームのStorage Efficiencyの情報は以下のとおりです。
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 11.75GB 16GB 15.20GB 3.45GB 22% 5.51GB 61% 5.51GB 61% 2.18GB 0B 0% 8.97GB 59% - 8.97GB 0B 0% ::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:02:01 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 07:55:56 2023 Last Success Operation End: Thu Dec 07 07:58:01 2023 Last Operation Begin: Thu Dec 07 07:55:56 2023 Last Operation End: Thu Dec 07 07:58:01 2023 Last Operation Size: 1.67GB Last Operation Error: - Operation Frequency: 
Once approxmiately every 43 min(s) and 7 sec(s) Changelog Usage: 4% Changelog Size: 7.58MB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 8.97GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 738703 Duplicate Blocks Found: 113154 Sorting Begin: Thu Dec 7 07:55:56 UTC 2023 Blocks Deduplicated: 113153 Blocks Snapshot Crunched: 0 De-duping Begin: Thu Dec 7 07:56:03 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 738703 Same FP Count: 113154 Same FBN: 0 Same Data: 113153 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 
Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
手動でポストプロセス重複排除を実行
`-scan-old-data` の指定なし
次に、手動でポストプロセス重複排除を実行してみます。
::*> volume efficiency start -volume vol1 The efficiency operation for volume "vol1" of Vserver "fsx" has started.
実行後のLast Operation Sizeを確認すると、776.6MBでした。今まで投入されたデータ全てに対して処理した訳ではないことが分かります。
::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:00:05 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 08:04:23 2023 Last Success Operation End: Thu Dec 07 08:04:24 2023 Last Operation Begin: Thu Dec 07 08:04:23 2023 Last Operation End: Thu Dec 07 08:04:24 2023 Last Operation Size: 776.6MB Last Operation Error: - Operation Frequency: Once approxmiately every 30 min(s) and 14 sec(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 8.97GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 824354 Duplicate Blocks Found: 0 Sorting Begin: Thu Dec 7 08:04:23 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Thu Dec 7 08:04:23 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 824354 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale 
Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
もう少し確認してみましょう。
1GiBのテストファイルを作成します。
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_3 bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.34989 s, 169 MB/s
ファイル追加後のChange logは以下のとおりです。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress
vserver volume state   progress          changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- --------------- -------------- -----------------
fsx     vol1   Enabled Idle for 00:17:46 6%              9.96MB         9.95GB
再度手動でポストプロセス重複排除を行います。
::*> volume efficiency start -volume vol1 The efficiency operation for volume "vol1" of Vserver "fsx" has started.
実行後、Storage Efficiencyの状態を確認します。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress vserver volume state progress changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:00:16 0% 0B 9.96GB ::*> volume efficiency show -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:00:40 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 08:22:35 2023 Last Success Operation End: Thu Dec 07 08:22:36 2023 Last Operation Begin: Thu Dec 07 08:22:35 2023 Last Operation End: Thu Dec 07 08:22:36 2023 Last Operation Size: 1GB Last Operation Error: - Operation Frequency: Once approxmiately every 27 min(s) and 22 sec(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 9.96GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 1086497 Duplicate Blocks Found: 1 Sorting Begin: Thu Dec 7 08:22:35 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Thu Dec 7 08:22:35 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: 
false Total Sorted Blocks: 1086497 Same FP Count: 1 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 1 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true
Last Operation Sizeは1GBでした。つまり、通常のポストプロセス重複排除ではChange logで保持しているデータブロック分しか処理しないことが分かります。
`-scan-old-data` の指定あり
では、過去のデータを改めてポストプロセス重複排除したい場合は、どのようにしたら良いのでしょうか。
そのような場合は`-scan-old-data`を指定することで実現可能です。
[-o, -scan-all] - Scan all the data without shared block optimization (if scanning old data)
Scans the entire file system and processes the shared blocks also. You may be able to achieve additional space savings using this option. Where as, by default the option -scan-old-data saves some time by skipping the shared blocks.
実際に実行してみます。
# 実行前のボリュームの物理データ使用量や重複排除量などの情報の確認 ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 10.75GB 16GB 15.20GB 4.45GB 29% 5.51GB 55% 5.51GB 55% 2.18GB 0B 0% 9.96GB 66% - 9.96GB 0B 0% # ポストプロセス重複排除の実行 ::*> volume efficiency start -volume vol1 -scan-old-data Warning: This operation scans all of the data in volume "vol1" of Vserver "fsx". It might take a significant time, and degrade performance during that time. Do you want to continue? {y|n}: y The efficiency operation for volume "vol1" of Vserver "fsx" has started.
「ボリューム内の全てのデータをスキャンするため時間もかかり、パフォーマンスが低下する可能性がある」という警告が表示されました。既に稼働中の本番ワークロードで実行する際は気をつけましょう。
実行した際のStorage Efficiencyの情報は以下のとおりです。Last Operation Sizeは4.17GBと、ボリューム内のほぼ全てのデータをスキャンしたことが分かります。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled 851972 KB Scanned 0B 0% 0B 9.96GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:00:06 4.17GB 0% 0B 9.98GB ::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:00:59 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Thu Dec 07 08:28:01 2023 Last Success Operation End: Thu Dec 07 08:28:24 2023 Last Operation Begin: Thu Dec 07 08:28:01 2023 Last Operation End: Thu Dec 07 08:28:24 2023 Last Operation Size: 4.17GB Last Operation Error: - Operation Frequency: Once approxmiately every 19 min(s) and 16 sec(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 9.98GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 1092956 Blocks Processed For Compression: 0 Gathering Begin: Thu Dec 7 08:28:01 UTC 2023 Gathering Phase 2 
Begin: Thu Dec 7 08:28:19 UTC 2023 Fingerprints Sorted: 1092956 Duplicate Blocks Found: 6456 Sorting Begin: Thu Dec 7 08:28:19 UTC 2023 Blocks Deduplicated: 6421 Blocks Snapshot Crunched: 0 De-duping Begin: Thu Dec 7 08:28:22 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 1092956 Same FP Count: 6456 Same FBN: 0 Same Data: 6421 No Op: 0 Same VBN: 31 Mismatched Data: 1 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
なお、追加で重複するようなデータを投入していなかったので、重複排除量は変わりありませんでした。
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 10.73GB 16GB 15.20GB 4.47GB 29% 5.51GB 55% 5.51GB 55% 2.18GB 0B 0% 9.98GB 66% - 9.98GB 0B 0%
ちなみに、ポストプロセス重複排除を実行するタイミング(8:28)でディスクスループットやディスクIOPS、CPU使用率が大きく増加していました。
このことから全てのデータをスキャンしていることが分かります。
やはり、本番ワークロード上で`-scan-old-data`を付与してポストプロセス重複排除を実行するのは避けた方が良さそうですね。
以前のポストプロセス重複排除でスキャン済みのデータブロックとの重複排除
ここでふと、以前のポストプロセス重複排除でスキャン済みのデータブロックとの重複排除が効くのか気になりました。
ポストプロセス重複排除がかかると、Change logサイズが0にリセットされます。
このときのChange logの扱いが気になります。内部的にはChange logを保持しているのでしょうか。
ポストプロセス重複排除が実行されたタイミングで保持しているChange log間で重複しているデータブロックを探すのか、それとも内部的には過去のChange logも保持しており、そこからも重複しているデータブロックを探しにいくのでしょうか。
もし、前者なのであれば、あまりに頻繁にポストプロセス重複排除が実行されると、重複排除率は実際のデータブロック重複率よりも少ない割合になりそうです。
一方、後者なのであれば、`-scan-old-data`オプションを付与しているのと同様の動きになってしまうような気がしています。
実際に動作確認してみましょう。
まず、`/usr`をFSxNのボリュームにコピーします。
$ sudo cp -pr /usr /mnt/fsxn/vol1/usr5
コピー後のボリュームの情報は以下のとおりです。Change logサイズが0から3.34MBに増えましたね。
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:06:41 4.17GB 2% 3.34MB 11.46GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 10.33GB 16GB 15.20GB 4.87GB 32% 6.59GB 58% 6.59GB 58% 2.18GB 0B 0% 11.46GB 75% - 11.46GB 0B 0%
この状態でポストプロセス重複排除を行います。
::*> volume efficiency start -volume vol1 The efficiency operation for volume "vol1" of Vserver "fsx" has started. ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- -------------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled 217124 KB (61%) Done 4.17GB 0% 3.38MB 11.46GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- -------------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled 341184 KB (96%) Done 4.17GB 0% 3.38MB 11.47GB ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:00:03 345.6MB 0% 0B 11.46GB
ボリュームの情報を確認すると、重複排除量が6.59GBから6.93GBに増えていました。
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 16GB 0 10.67GB 16GB 15.20GB 4.53GB 29% 6.93GB 60% 6.93GB 60% 2.18GB 0B 0% 11.46GB 75% - 11.46GB 0B 0%
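この結果から、ポストプロセス重複排除は「今回のChange logに記録された新規ブロック」を「スキャン済みブロック全体のフィンガープリント情報」と突き合わせているのではないか、と推測できます。その動きをローカルで模擬したスケッチが以下です(ファイル内容や構成はすべて説明用の仮定で、ONTAPの内部実装を示すものではありません)。

```shell
#!/bin/sh
# 推測した動きのスケッチ(あくまで仮定):
# 新規ブロックのフィンガープリントを、スキャン済みブロック全体のDBと突き合わせる
set -eu
db=$(mktemp)    # スキャン済みブロックのフィンガープリントDB (前回処理ぶん)
printf 'fp-A\nfp-B\nfp-C\n' > "$db"
log=$(mktemp)   # 今回のChange log (新規書き込みぶんのフィンガープリント)
printf 'fp-B\nfp-D\n' > "$log"
# DBに既に存在するフィンガープリントは重複排除の候補になる
dups=$(grep -cxFf "$db" "$log" || true)
echo "dedupe candidates: ${dups}"   # fp-B の1件が候補
cat "$log" >> "$db"                 # 処理後、新規ぶんのフィンガープリントもDBへ取り込む
rm -f "$db" "$log"
```

このモデルであれば、Change logが0にリセットされても過去のデータブロックとの重複排除が効くことと、通常実行時の処理量がChange logぶんに留まることの両方を説明できます。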
Next, run post-process deduplication with `-scan-old-data`.
```
::*> volume efficiency start -volume vol1 -scan-old-data

Warning: This operation scans all of the data in volume "vol1" of Vserver "fsx". It might take a significant time, and degrade performance during that time. Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1" of Vserver "fsx" has started.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 851976 KB Scanned 345.6MB 0% 0B 11.40GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------ ------------ --------------- -------------- -----------------
fsx vol1 Enabled 3833864 KB Scanned 345.6MB 0% 0B 11.40GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------ ------------ --------------- -------------- -----------------
fsx vol1 Enabled 4259848 KB Scanned 345.6MB 0% 0B 11.40GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:01 4.17GB 0% 0B 11.43GB
```
There was no significant change in the deduplication ratio.
```
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 16GB 0 10.69GB 16GB 15.20GB 4.51GB 29% 6.91GB 61% 6.91GB 61% 2.17GB 0B 0% 11.43GB 75% - 11.43GB 0B 0%
```
Since the deduplication savings increased even though `-scan-old-data` was not specified, for a moment I thought, "it looks like this deduplicates against data blocks already scanned by a previous post-process deduplication run."
However, it is also quite possible that there is simply duplicate data within the copied `/usr` itself.
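Whether a dataset contains intra-dataset duplicates can be roughly estimated by fingerprinting each 4KiB block and counting repeats, loosely mimicking how deduplication finds identical blocks. The sketch below is only an illustration under that assumption: it uses MD5 as a stand-in fingerprint, whereas ONTAP uses its own fingerprints plus byte-by-byte verification.

```python
import hashlib
from collections import Counter

BLOCK = 4096  # WAFL data block size

def duplicate_block_count(blobs):
    """Count 4KiB blocks that are duplicates of an earlier block.

    Rough stand-in for fingerprint-based deduplication; not ONTAP's
    actual algorithm.
    """
    counts = Counter()
    total = 0
    for data in blobs:
        for i in range(0, len(data), BLOCK):
            counts[hashlib.md5(data[i:i + BLOCK]).digest()] += 1
            total += 1
    dups = sum(c - 1 for c in counts.values())
    return dups, total

# Two identical 8KiB payloads: the second copy's 2 blocks are duplicates
payload = b"A" * BLOCK + b"B" * BLOCK
print(duplicate_block_count([payload, payload]))  # → (2, 4)
```

Running something like this over the copied files would show how much of the observed savings could come from duplicates inside the dataset itself rather than from previously scanned blocks.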
Let's try a different pattern.
First, expand the volume to 64GB.
```
::*> volume modify -volume vol1 -size 64GB
Volume modify successful on volume vol1 of Vserver fsx.

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 56.29GB 64GB 60.80GB 4.51GB 7% 6.91GB 61% 6.91GB 61% 2.17GB 0B 0% 11.43GB 19% - 11.43GB 0B 0%
```
Then, create a 4GiB file.
```
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_4 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 28.4229 s, 151 MB/s
```
The volume information is as follows.
```
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:15:51 4.17GB 6% 39.96MB 15.48GB

::*>
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 52.23GB 64GB 60.80GB 8.57GB 14% 6.91GB 45% 6.91GB 45% 2.17GB 0B 0% 15.48GB 25% - 15.48GB 0B 0%
```
When copying this file, I want inline deduplication not to take effect.
The earlier tests suggested that inline deduplication uses recently written data that is still held in the memory buffer cache.
The FSxN file system used in this test has a throughput capacity of 128MB/s.
```
::*> node virtual-machine instance show-settings -node FsxId04076278992c2097a-01

Node: FsxId04076278992c2097a-01
Provider VM Name: -
Consumer of this Instance: FSx
Storage Type for the Instance: SSD
Storage Capacity for the Instance in GB: 1024
Committed IOPS per GB for the Instance: 3072
Maximum Throughput Capacity in MB/s for the Instance: 128
Total Network Bandwidth Limit in MB/s: -
Total Volume Bandwidth Limit in MB/s: -
```
An FSxN file system with a throughput capacity of 128MB/s has 16GB of in-memory cache.
It is unclear whether the FSxN in-memory cache size equals the memory buffer cache size used for inline deduplication, but it should serve as a rough guide.
This time, I create a 16GiB file so that the data of the previously written file is pushed out of the cache.
```
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.595 s, 147 MB/s
```
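The eviction assumption above (writing more data than the cache holds pushes out the earlier file's blocks) can be sketched with a toy LRU cache. This is only an illustration of the assumption, not how FSxN's buffer cache is actually implemented.

```python
from collections import OrderedDict

class TinyBlockCache:
    """Toy LRU buffer cache: oldest entries are evicted once full."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.store = OrderedDict()

    def put(self, key):
        self.store[key] = True
        self.store.move_to_end(key)          # mark as most recently used
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

    def __contains__(self, key):
        return key in self.store

cache = TinyBlockCache(4)
for blk in ["f4-0", "f4-1", "f4-2", "f4-3"]:  # earlier file's blocks
    cache.put(blk)
for blk in ["f5-0", "f5-1", "f5-2", "f5-3"]:  # new write fills the cache
    cache.put(blk)
print("f4-0" in cache)  # → False: the earlier file was evicted
```

By the same logic, writing 16GiB into a 16GB cache should leave little of `test_file_4` resident, so inline deduplication against it should miss.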
The volume information at this point is as follows. Post-process deduplication appears to have run.
```
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 0 KB Searched 4.17GB 14% 164.8MB 28.51GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 1123776 KB Searched 4.17GB 18% 187.5MB 30.80GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:33 6.76GB 20% 132.3MB 31.78GB
```
Just to be sure, create another 16GiB file.
```
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 117.091 s, 147 MB/s
```
The volume information at this point is as follows.
```
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 0 KB Searched 6.76GB 8% 185.8MB 21.27GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:09 13.24GB 24% 160.0MB 31.97GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.87GB 64GB 60.80GB 24.93GB 41% 7.04GB 22% 7.04GB 22% 2.30GB 0B 0% 31.97GB 53% - 31.97GB 0B 0%
```
To reset the change log size, run post-process deduplication.
```
::*> volume efficiency start -volume vol1
The efficiency operation for volume "vol1" of Vserver "fsx" has started.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 0 KB Searched 13.24GB 0% 160.0MB 31.97GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 42094712 KB Searched 13.24GB 0% 160.0MB 31.97GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 0 KB Verified 13.24GB 0% 0B 32.48GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 17878200 KB Verified 13.24GB 0% 0B 31.92GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ---------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 28% Merged 13.24GB 0% 0B 32.02GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:09 16.00GB 0% 0B 32.02GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.73GB 64GB 60.80GB 25.07GB 41% 6.91GB 22% 6.91GB 22% 2.17GB 0B 0% 31.99GB 53% - 31.99GB 0B 0%
```
In this state, copy the test file.
```
$ sudo cp /mnt/fsxn/vol1/test_file_4 /mnt/fsxn/vol1/test_file_4_copy
```
The volume information is as follows. The entire 4GiB of the file appears to have been deduplicated inline.
```
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:01:14 16.00GB 0% 2.11MB 33.06GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:01:54 16.00GB 1% 7.32MB 35.74GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.24GB 64GB 60.80GB 25.56GB 42% 10.18GB 28% 10.18GB 28% 5.43GB 0B 0% 35.74GB 59% - 35.74GB 0B 0%
```
It seems that the data blocks read during the copy are held in memory, so inline deduplication takes effect.
This needs to be verified again.
For now, run post-process deduplication manually.
```
::*> volume efficiency start -volume vol1
The efficiency operation for volume "vol1" of Vserver "fsx" has started.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 10849708 KB Searched 16.00GB 0% 7.38MB 35.74GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 417148 KB (53%) Done 16.00GB 0% 7.38MB 35.74GB

::*>
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:39 756.0MB 0% 0B 35.74GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.98GB 64GB 60.80GB 24.82GB 40% 10.92GB 31% 10.92GB 31% 6.18GB 0B 0% 35.74GB 59% - 35.74GB 0B 0%
```
Deduplication savings increased slightly.
Now try post-process deduplication with `-scan-old-data`.
```
::*> volume efficiency start -volume vol1 -scan-old-data

Warning: This operation scans all of the data in volume "vol1" of Vserver "fsx". It might take a significant time, and degrade performance during that time. Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1" of Vserver "fsx" has started.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 851972 KB Scanned 756.0MB 0% 0B 35.75GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 22949896 KB Scanned 756.0MB 0% 0B 35.75GB

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:03 24.17GB 0% 0B 35.89GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.83GB 64GB 60.80GB 24.97GB 41% 10.92GB 30% 10.92GB 30% 6.18GB 0B 0% 35.89GB 59% - 35.89GB 0B 0%

::*> volume efficiency show -volume vol1 -instance

Vserver Name: fsx
Volume Name: vol1
Volume Path: /vol/vol1
State: Enabled
Auto State: Auto
Status: Idle
Progress: Idle for 00:00:59
Type: Regular
Schedule: -
Efficiency Policy Name: auto
Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601
Optimization: space-saving
Min Blocks Shared: 1
Blocks Skipped Sharing: 0
Last Operation State: Success
Last Success Operation Begin: Thu Dec 07 09:10:16 2023
Last Success Operation End: Thu Dec 07 09:11:54 2023
Last Operation Begin: Thu Dec 07 09:10:16 2023
Last Operation End: Thu Dec 07 09:11:54 2023
Last Operation Size: 24.17GB
Last Operation Error: -
Operation Frequency: Once approxmiately every 12 min(s) and 7 sec(s)
Changelog Usage: 0%
Changelog Size: 0B
Vault transfer log Size: 0B
Compression Changelog Size: 0B
Changelog Overflow: 0B
Logical Data Size: 35.89GB
Logical Data Limit: 1.25PB
Logical Data Percent: 0%
Queued Job: -
Stale Fingerprint Percentage: 0
Stage: Done
Checkpoint Time: No Checkpoint
Checkpoint Operation Type: -
Checkpoint Stage: -
Checkpoint Substage: -
Checkpoint Progress: -
Fingerprints Gathered: 6335818
Blocks Processed For Compression: 0
Gathering Begin: Thu Dec 7 09:10:16 UTC 2023
Gathering Phase 2 Begin: Thu Dec 7 09:11:37 UTC 2023
Fingerprints Sorted: 6335818
Duplicate Blocks Found: 6448
Sorting Begin: Thu Dec 7 09:11:38 UTC 2023
Blocks Deduplicated: 6403
Blocks Snapshot Crunched: 0
De-duping Begin: Thu Dec 7 09:11:48 UTC 2023
Fingerprints Deleted: 0
Checking Begin: -
Compression: false
Inline Compression: true
Application IO Size: auto
Compression Type: adaptive
Storage Efficiency Mode: efficient
Verify Trigger Rate: 20
Total Verify Time: 00:00:20
Verify Suspend Count: -
Constituent Volume: false
Total Sorted Blocks: 6335818
Same FP Count: 6448
Same FBN: 0
Same Data: 6403
No Op: 0
Same VBN: 31
Mismatched Data: 11
Same Sharing Records: 0
Max RefCount Hits: 0
Stale Recipient Count: 0
Stale Donor Count: 0
VBN Absent Count: 0
Num Out Of Space: 0
Mismatch Due To Overwrites: 0
Stale Auxiliary Recipient Count: 0
Stale Auxiliary Recipient Block Count: 0
Mismatched Recipient Block Pointers: 0
Unattempted Auxiliary Recipient Share: 0
Skip Share Blocks Delta: 0
Skip Share Blocks Upper: 0
Inline Dedupe: true
Data Compaction: true
Cross Volume Inline Deduplication: false
Compression Algorithm: lzopro
Cross Volume Background Deduplication: false
Extended Compressed Data: true
Volume has auto adaptive compression savings: true
Volume doing auto adaptive compression: true
auto adaptive compression on existing volume: false
File compression application IO Size: -
Compression Algorithm List: lzopro
Compression Begin Time: -
Number of L1s processed by compression phase: 0
Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Volume Has Extended Auto Adaptive Compression: true
```
As expected, no change.
The disk throughput, disk IOPS, and CPU utilization during this series of operations were as follows.
Deduplication against data blocks already scanned by a previous post-process deduplication run (retry)
Let's retry.
First, copy the test file to local storage so that inline deduplication does not take effect when the file is copied back to the volume.
```
$ sudo cp /mnt/fsxn/vol1/test_file_4 /home/ec2-user/test_file_4
```
Then, write a 16GiB file twice to overwrite the contents of the memory buffer cache.
```
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.5 s, 147 MB/s

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.696 s, 147 MB/s
```
The change log and volume status are as follows. Since a full 32GB was written, post-process deduplication ran and the change log was reset.
```
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:21 32GB 0% 0B 35.75GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 22.68GB 64GB 60.80GB 38.12GB 62% 10.91GB 22% 10.91GB 22% 6.17GB 0B 0% 49.04GB 81% - 35.75GB 0B 0%
```
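The change log growth here can be sanity-checked against the figures introduced earlier: one 32-byte footprint is generated per 4KiB data block, so writing 32GiB should produce roughly 256MiB of change log before the threshold triggers post-process deduplication. A minimal sketch of that arithmetic, assuming those per-block figures:

```python
BLOCK = 4096       # WAFL data block size (bytes)
FOOTPRINT = 32     # change-log footprint per data block (bytes)

def changelog_bytes(bytes_written: int) -> int:
    """Estimated change-log growth for newly written data."""
    blocks = -(-bytes_written // BLOCK)  # ceiling division
    return blocks * FOOTPRINT

gib = 1024 ** 3
# 32 GiB written -> 8,388,608 blocks -> 256 MiB of change log
print(changelog_bytes(32 * gib) // (1024 ** 2))  # → 256
```

This matches the observation that a 32GB write was enough to push the change log past its threshold and kick off post-process deduplication.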
Copy the local file to the FSxN volume.
```
$ sudo cp /home/ec2-user/test_file_4 /mnt/fsxn/vol1/test_file_4_copy2
```
The change log and volume status are as follows. Deduplication savings increased by about 100MB, which is marginal.
```
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:08:13 32GB 5% 38.44MB 39.81GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 18.73GB 64GB 60.80GB 42.07GB 69% 11.02GB 21% 11.02GB 21% 6.17GB 0B 0% 53.09GB 87% - 39.81GB 0B 0%
```
Now run post-process deduplication without the `-scan-old-data` option.
```
FsxId04076278992c2097a::*> volume efficiency start -volume vol1
The efficiency operation for volume "vol1" of Vserver "fsx" has started.

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 41348556 KB Searched 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 0 KB (0%) Done 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 302532 KB (7%) Done 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- --------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled 2429332 KB (59%) Done 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------------------------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Inode:21548 of 293552, curr_fbn: 9 of max_fbn: 8 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------------------------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Inode:128813 of 293552, curr_fbn: 1 of max_fbn: 0 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------------------------------------------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Inode:236916 of 293552, curr_fbn: 2210850 of max_fbn: 4194303 32GB 0% 38.93MB 39.81GB

FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:00 3.89GB 0% 0B 39.85GB
```
Checking the deduplication savings, it was 14.95GB. Deduplication against data blocks already scanned by the earlier post-process deduplication run appears to be taking effect.
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 22.62GB 64GB 60.80GB 38.18GB 62% 14.95GB 28% 14.95GB 28% 6.21GB 0B 0% 53.13GB 87% - 39.85GB 0B 0%
This could just be a coincidence, so let's prepare a new file and try again.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_6 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 28.4588 s, 151 MB/s

$ sudo cp /mnt/fsxn/vol1/test_file_6 /home/ec2-user/test_file_6

sh-5.2$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.891 s, 147 MB/s

sh-5.2$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.68 s, 147 MB/s

sh-5.2$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.748 s, 147 MB/s

sh-5.2$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_5 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.621 s, 147 MB/s
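As an aside, dd reports throughput in decimal megabytes (10^6 bytes) per second, so the 151 MB/s figure above can be reproduced from the byte count and elapsed time it printed:

```python
# Reproduce dd's throughput report for the 4 GiB write above.
# dd uses decimal megabytes (10**6 bytes) per second.
bytes_written = 4294967296  # 4 GiB, as reported by dd
elapsed_sec = 28.4588       # elapsed time from the dd output

throughput_mb_s = bytes_written / elapsed_sec / 10**6
print(f"{throughput_mb_s:.0f} MB/s")  # 151 MB/s, matching dd's report
```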
First, run post-process deduplication once to reset the change log.
::*> volume efficiency start -volume vol1
The efficiency operation for volume "vol1" of Vserver "fsx" has started.

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:00:06 29.39GB 0% 0B 44.40GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 18.59GB 64GB 60.80GB 42.21GB 69% 14.91GB 26% 14.91GB 26% 6.17GB 0B 0% 57.13GB 94% - 43.84GB 0B 0%
Copy the file that was also saved locally back onto the FSxN volume.
$ sudo cp /home/ec2-user/test_file_6 /mnt/fsxn/vol1/test_file_6_copy
Confirm that inline deduplication has barely taken effect at the time of the copy.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
fsx vol1 Enabled Idle for 00:01:59 29.39GB 6% 39.96MB 47.89GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 14.53GB 64GB 60.80GB 46.27GB 76% 14.92GB 24% 14.92GB 24% 6.17GB 0B 0% 61.18GB 101% - 47.89GB 0B 0%
Run post-process deduplication without the `-scan-old-data` option.
::*> volume efficiency start -volume vol1 The efficiency operation for volume "vol1" of Vserver "fsx" has started. FsxId04076278992c2097a::*> FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-sizevserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ------------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled 8094780 KB Searched 29.39GB 0% 39.99MB 47.89GB FsxId04076278992c2097a::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress,last-op-size vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- --------------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled 3421164 KB (81%) Done 29.39GB 0% 39.99MB 47.90GB vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------ --------------- -------------- ----------------- fsx vol1 Enabled Idle for 00:00:01 4.00GB 0% 0B 47.94GB ::*> volume efficiency show -volume vol1 -instance Vserver Name: fsx Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:02:55 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 9dc35dc6-94c7-11ee-a4d2-3159877d4601 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Mon Dec 11 03:24:17 2023 Last Success Operation End: Mon Dec 11 03:25:39 2023 Last Operation Begin: Mon Dec 11 03:24:17 2023 Last Operation End: Mon Dec 11 03:25:39 2023 Last Operation Size: 4.00GB Last Operation Error: - Operation Frequency: Once approxmiately every 0 day(s) and 5 hour(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer 
log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 47.94GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 16068178 Duplicate Blocks Found: 1048302 Sorting Begin: Mon Dec 11 03:24:17 UTC 2023 Blocks Deduplicated: 1048302 Blocks Snapshot Crunched: 0 De-duping Begin: Mon Dec 11 03:24:43 UTC 2023 Fingerprints Deleted: 7641937 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:24 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 16068178 Same FP Count: 1048302 Same FBN: 0 Same Data: 1048302 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: Mon Dec 11 03:25:26 UTC 2023 Number of L1s processed by compression phase: 35812 Number of indirect blocks skipped by compression phase: L1: 
26789 L2: 36 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 64GB 0 18.53GB 64GB 60.80GB 42.27GB 69% 18.95GB 31% 18.95GB 31% 10.21GB 0B 0% 61.22GB 101% - 47.94GB 0B 0%
The deduplication savings went from 14.92GB to 18.95GB.
This confirms that deduplication against data blocks scanned by the earlier post-process run is indeed taking effect.
In other words, to deduplicate against past data blocks, there is no need to frequently run post-process deduplication with the `-scan-old-data` option.
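As a back-of-the-envelope check on the figures above, the growth in deduplication savings lines up with the size of the `test_file_6` copy, since every block of the copy deduplicates against the original already on the volume:

```python
# Sanity check: the growth in dedupe savings should roughly match the
# 4 GiB test_file_6 copy written back to the volume.
savings_before_gb = 14.92  # dedupe-space-saved before "volume efficiency start"
savings_after_gb = 18.95   # dedupe-space-saved after
copied_file_gib = 4.0      # size of test_file_6

delta_gb = savings_after_gb - savings_before_gb
print(f"savings grew by {delta_gb:.2f} GB")  # 4.03 GB, close to the 4 GiB copy
assert abs(delta_gb - copied_file_gib) < 0.1
```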
Verifying post-process compression
Changing the settings
Next, let's verify post-process compression.
Post-process compression here refers to Inactive data compression.
First, check the default settings.
::*> volume efficiency inactive-data-compression show -volume vol1 -instance Volume: vol1 Vserver: fsx Is Enabled: false Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
By default, it appears that data blocks are compressed once 14 to 21 days have passed since they were last accessed.
I can't afford to wait 14 days, so I'll change this to 1 day.
::*> volume efficiency inactive-data-compression modify -volume vol1 ? [ -vserver <vserver name> ] *Vserver Name (default: fsx) [ -progress <text> ] *Progress [ -status <text> ] *Status [ -failure-reason <text> ] *Failure Reason [ -total-blocks <integer> ] *Total Blocks to be Processed [ -total-processed <integer> ] *Total Blocks Processed [ -percentage <percent> ] *Progress [ -is-enabled {true|false} ] *State of Inactive Data Compression on the Volume [ -threshold-days <integer> ] *Inactive data compression scan threshold days value [ -threshold-days-min <integer> ] *Inactive data compression scan threshold minimum allowed value. [ -threshold-days-max <integer> ] *Inactive data compression scan threshold maximum allowed value. [ -read-history-window-size <integer> ] *Time window(in days) for which client reads data is collected for tuning. [ -tuning-enabled {true|false} ] *State of auto-tuning of Inactive data compression scan on volume. [ -compression-algorithm {lzopro|zstd} ] *Inactive data compression algorithm ::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true -threshold-days 1 -threshold-days-min 1 Error: command failed: Failed to modify the state of inactive data compression on volume "vol1" in Vserver "fsx". Reason: "Invalid option combination specified for inactive data compression scan tuning. "threshold-days", "threshold-days-min", and "threshold-days-max" must always be specified together. 
" ::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true -threshold-days 1 -threshold-days-min 1 -threshold-days-max 21 ::*> volume efficiency inactive-data-compression show -volume vol1 -instance Volume: vol1 Vserver: fsx Is Enabled: true Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 21 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
The change succeeded. When changing the threshold, it seems that `threshold-days`, `threshold-days-min`, and `threshold-days-max` must all be specified at the same time.
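The validation rule observed above can be sketched as a toy model. This is a reimplementation of the behavior seen in the CLI error, not ONTAP code:

```python
# A toy model (not ONTAP code) of the observed validation rule:
# threshold-days, threshold-days-min, and threshold-days-max must
# always be specified together when tuning the scan threshold.
def validate_threshold_args(days=None, days_min=None, days_max=None):
    provided = [v is not None for v in (days, days_min, days_max)]
    if any(provided) and not all(provided):
        raise ValueError(
            '"threshold-days", "threshold-days-min", and '
            '"threshold-days-max" must always be specified together.'
        )
    return days, days_min, days_max

# Mirrors the failed and successful commands above:
try:
    validate_threshold_args(days=1, days_min=1)           # fails: max missing
except ValueError as e:
    print("Error:", e)
validate_threshold_args(days=1, days_min=1, days_max=21)  # succeeds
```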
Now we wait. The current time is 12/7 09:23.
Before leaving it alone, let's record the aggregate and volume information.
::*> date show (cluster date show) Node Date Time zone --------- ------------------------- ------------------------- FsxId04076278992c2097a-01 12/7/2023 09:23:56 +00:00 Etc/UTC FsxId04076278992c2097a-02 12/7/2023 09:23:56 +00:00 Etc/UTC 2 entries were displayed. ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId04076278992c2097a-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 35.69GB Total Physical Used: 24.50GB Total Storage Efficiency Ratio: 1.46:1 Total Data Reduction Logical Used Without Snapshots: 35.68GB Total Data Reduction Physical Used Without Snapshots: 24.50GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.46:1 Total Data Reduction Logical Used without snapshots and flexclones: 35.68GB Total Data Reduction Physical Used without snapshots and flexclones: 24.50GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.46:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 35.89GB Total Physical Used in FabricPool Performance Tier: 24.92GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.44:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.89GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 24.92GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.44:1 Logical Space Used for All Volumes: 35.68GB Physical Space Used for All Volumes: 24.76GB Space Saved by Volume Deduplication: 10.90GB Space Saved by Volume Deduplication and pattern detection: 10.92GB Volume Deduplication Savings ratio: 1.44:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 17.95MB Volume Data Reduction SE Ratio: 1.44:1 Logical Space Used by the Aggregate: 24.83GB Physical Space Used by the 
Aggregate: 24.50GB Space Saved by Aggregate Data Reduction: 340.1MB Aggregate Data Reduction SE Ratio: 1.01:1 Logical Size Used by Snapshot Copies: 1.52MB Physical Size Used by Snapshot Copies: 584KB Snapshot Volume Data Reduction Ratio: 2.67:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.67:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume show-footprint -volume vol1 -instance Vserver: fsx Volume Name: vol1 Volume MSID: 2154998956 Volume DSID: 1026 Vserver UUID: 9cb1f92a-94c7-11ee-a274-2d5895636ad1 Aggregate Name: aggr1 Aggregate UUID: f75e2e02-94c6-11ee-a274-2d5895636ad1 Hostname: FsxId04076278992c2097a-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 145.2MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 24.97GB Volume Data Footprint Percent: 3% Flexible Volume Metadata Footprint: 214.9MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 551.2MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 25.86GB Total Footprint Percent: 3% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 25.50GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 
145.2MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 25.86GB Effective Total after Footprint Data Reduction Percent: 3% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: -
Note that with TSSE, compression is now performed at the aggregate level rather than the volume level. Consequently, the compression savings cannot be checked with `volume efficiency`; check them with `volume show-footprint` instead.
- Compression is not achieved at the volume level anymore, earlier compression savings were displayed through the df command's -S option which is deprecated going forward and will always show 0 for compression.
- On TSSE-enabled volumes, all compression savings are reflected at the aggregate layer.
- Data compression, Data compaction and Cross volume dedupe savings are coalesced in Aggregate Data Reduction SE Ratio reported by the aggregate show-efficiency command.
Also, on FSxN the Inactive data compression run interval is fixed at 24 hours. As mentioned earlier, this is because FSxN as of 2023/12/8 does not allow the `node run` command to be executed, so `tsse.cds_allow_threshold_hours` cannot be changed.
23 hours later
23 hours have passed. Let's check the status.
::*> date show (cluster date show) Node Date Time zone --------- ------------------------- ------------------------- FsxId04076278992c2097a-01 12/8/2023 08:04:50 +00:00 Etc/UTC FsxId04076278992c2097a-02 12/8/2023 08:04:50 +00:00 Etc/UTC 2 entries were displayed. ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: fsx Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 4945 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 21 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
It appears the scan ran 4,945 seconds ago, so it does not run exactly 24 hours after Inactive data compression is enabled.
However, `Number of Cold Blocks Encountered` remains 0.
Since 24 hours have passed since the test files were written, I expected them to be judged as cold blocks, but they were not.
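As a simplified sketch of the presumed cold-data test (an assumption based on NetApp's description, not ONTAP internals): a block becomes a candidate for Inactive data compression only once `threshold-days` have elapsed without any read or write access.

```python
from datetime import datetime, timedelta

# Assumed model of the cold-data test: a block is cold once it has gone
# threshold-days without being read or written.
def is_cold(last_access: datetime, now: datetime, threshold_days: int) -> bool:
    return now - last_access >= timedelta(days=threshold_days)

now = datetime(2023, 12, 8, 8, 4)  # time of the status check above

# A block last touched at the 12/7 09:23 baseline is only 22h41m old,
# so even with the lowered 1-day threshold it is not yet cold:
print(is_cold(datetime(2023, 12, 7, 9, 23), now, threshold_days=1))  # False

# A block untouched since 07:00 the previous day has crossed 24 hours:
print(is_cold(datetime(2023, 12, 7, 7, 0), now, threshold_days=1))   # True
```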
Check the aggregate and volume information.
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- fsx vol1 64GB 0 35.83GB 64GB 60.80GB 24.97GB 41% 10.92GB 30% 10.92GB 30% 6.18GB 0B 0% 35.89GB 59% - 35.89GB 0B 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId04076278992c2097a-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 71.18GB Total Physical Used: 24.36GB Total Storage Efficiency Ratio: 2.92:1 Total Data Reduction Logical Used Without Snapshots: 35.28GB Total Data Reduction Physical Used Without Snapshots: 24.36GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.45:1 Total Data Reduction Logical Used without snapshots and flexclones: 35.28GB Total Data Reduction Physical Used without snapshots and flexclones: 24.36GB Total Data Reduction Efficiency Ratio without snapshots and 
flexclones: 1.45:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 71.79GB Total Physical Used in FabricPool Performance Tier: 25.20GB Total FabricPool Performance Tier Storage Efficiency Ratio: 2.85:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.89GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 25.20GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.42:1 Logical Space Used for All Volumes: 35.28GB Physical Space Used for All Volumes: 24.36GB Space Saved by Volume Deduplication: 10.90GB Space Saved by Volume Deduplication and pattern detection: 10.92GB Volume Deduplication Savings ratio: 1.45:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 17.95MB Volume Data Reduction SE Ratio: 1.45:1 Logical Space Used by the Aggregate: 24.70GB Physical Space Used by the Aggregate: 24.36GB Space Saved by Aggregate Data Reduction: 340.1MB Aggregate Data Reduction SE Ratio: 1.01:1 Logical Size Used by Snapshot Copies: 35.90GB Physical Size Used by Snapshot Copies: 1.37MB Snapshot Volume Data Reduction Ratio: 26889.95:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 26889.95:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume show-footprint -volume vol1 -instance Vserver: fsx Volume Name: vol1 Volume MSID: 2154998956 Volume DSID: 1026 Vserver UUID: 9cb1f92a-94c7-11ee-a274-2d5895636ad1 Aggregate Name: aggr1 Aggregate UUID: f75e2e02-94c6-11ee-a274-2d5895636ad1 Hostname: FsxId04076278992c2097a-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - 
Deduplication Footprint: 145.2MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 24.97GB Volume Data Footprint Percent: 3% Flexible Volume Metadata Footprint: 214.9MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 552.2MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 25.86GB Total Footprint Percent: 3% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 25.51GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 145.2MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: 335.2MB Footprint Data Reduction by Auto Adaptive Compression Percent: 0% Total Footprint Data Reduction: 335.2MB Total Footprint Data Reduction Percent: 0% Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 25.53GB Effective Total after Footprint Data Reduction Percent: 3% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: -
`Footprint Data Reduction by Auto Adaptive Compression` is now 335.2MB.
23 hours ago this value was `-`, and no writes or other operations were performed during this period.
From this, we can conclude that Inactive data compression compressed 335.2MB of data blocks.
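The footprint numbers are internally consistent: subtracting the Auto Adaptive Compression savings from the total footprint reproduces the reported effective total.

```python
# Verify the arithmetic from the volume show-footprint output above.
total_footprint_gb = 25.86  # "Total Footprint"
aac_savings_mb = 335.2      # "Footprint Data Reduction by Auto Adaptive Compression"

effective_total_gb = total_footprint_gb - aac_savings_mb / 1024
# Matches "Effective Total after Footprint Data Reduction: 25.53GB"
print(f"{effective_total_gb:.2f} GB")  # 25.53 GB
```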
Calculating Auto Adaptive Compression savings
Perhaps `Number of Cold Blocks Encountered` in `volume efficiency inactive-data-compression show` only shows the value from manually triggered Inactive data compression runs.
Checking the documentation, it appears that specifying the `-scan-mode compute_compression_savings` option when manually running Inactive data compression calculates the compression savings.
[-m, -scan-mode {default|compute_compression_savings|extended_recompression}] - scanner mode
This specifies in which mode inactive data compression scanner should be started. Three modes available 'default', 'compute_compression_savings' and 'extended_recompression'. 'default' scanner will start compressing the cold data in volume. 'compute_compression_savings' scanner will calculate the auto adaptive compression savings on the volume. 'extended_recompression' scanner will attempt to re-write existing cold data to reduce internal fragmentation.
Let's give it a try.
```
::*> volume efficiency inactive-data-compression start -volume vol1 -scan-mode compute_compression_savings
Inactive data compression scan started on volume "vol1" in Vserver "fsx"

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: compute_compression_savings
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 16%
Phase1 L1s Processed: 0
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 1790862
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 5416
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: compute_compression_savings
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 93%
Phase1 L1s Processed: 0
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 10264556
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 5422
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 1
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%
```
`Number of Cold Blocks Encountered` did not change, and nothing was reported either.
I waited an hour and ran it again.
```
::*> date show
  (cluster date show)
Node                      Date                      Time zone
------------------------- ------------------------- -------------------------
FsxId04076278992c2097a-01 12/8/2023 09:06:27 +00:00 Etc/UTC
FsxId04076278992c2097a-02 12/8/2023 09:06:27 +00:00 Etc/UTC
2 entries were displayed.

::*> volume efficiency inactive-data-compression start -volume vol1 -scan-mode compute_compression_savings
Inactive data compression scan started on volume "vol1" in Vserver "fsx"

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: compute_compression_savings
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 59%
Phase1 L1s Processed: 0
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 6534506
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 3123
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: compute_compression_savings
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 81%
Phase1 L1s Processed: 0
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 8948736
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 3125
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: compute_compression_savings
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 99%
Phase1 L1s Processed: 0
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 10901224
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 3127
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%
```
As expected, nothing changed.
After 48 hours
Let's check again 48 hours later.
```
::*> date show
  (cluster date show)
Node                      Date                      Time zone
------------------------- ------------------------- -------------------------
FsxId04076278992c2097a-01 12/9/2023 09:31:21 +00:00 Etc/UTC
FsxId04076278992c2097a-02 12/9/2023 09:31:21 +00:00 Etc/UTC
2 entries were displayed.

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 85456
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%
```
The last scan apparently ran 85,456 seconds ago, just short of 24 hours.
Let's wait a little while and check again.
```
::*> date show
  (cluster date show)
Node                      Date                      Time zone
------------------------- ------------------------- -------------------------
FsxId04076278992c2097a-01 12/9/2023 09:57:17 +00:00 Etc/UTC
FsxId04076278992c2097a-02 12/9/2023 09:57:17 +00:00 Etc/UTC
2 entries were displayed.

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 530
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%
```
A scan ran 530 seconds ago. However, `Number of Cold Blocks Encountered` and the other counters still have not changed.
Let's estimate the compression savings with `-scan-mode compute_compression_savings`.
```
::*> volume efficiency inactive-data-compression start -volume vol1 -scan-mode compute_compression_savings
Inactive data compression scan started on volume "vol1" in Vserver "fsx"

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: compute_compression_savings
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 70%
Phase1 L1s Processed: 0
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 7686011
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 607
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 7
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.83GB 64GB 60.80GB 24.97GB 41% 10.92GB 30% 10.92GB 30% 6.18GB 0B 0% 35.89GB 59% - 35.89GB 0B 0%
```
Still no change.
Running post-process compression manually
Now let's run post-process compression (Inactive data compression) manually.
```
::*> volume efficiency inactive-data-compression start -volume vol1
Inactive data compression scan started on volume "vol1" in Vserver "fsx"

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 674
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 73872
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 60352
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 22954
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 644152
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 268840
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 42073
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 3236128
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 268872
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 4026232
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 268872
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 46415
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 4320304
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 268872
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 49578
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 5126952
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 268872
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 16%
Phase1 L1s Processed: 54540
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 1765122
Number of Cold Blocks Encountered: 6153624
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 268872
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 139
Time since Last Inactive Data Compression Scan ended(sec): 17
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 17
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: fsx
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 6222880
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 296240
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 46
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 46
Tuning Enabled: true
Threshold: 1
Threshold Upper Limit: 21
Threshold Lower Limit: 1
Client Read history window: 14
Incompressible Data Percentage: 89%
```
It looks like 296,240 data blocks were compressed.
Let's check the aggregate and volume information.
```
::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
fsx vol1 64GB 0 35.83GB 64GB 60.80GB 24.97GB 41% 10.92GB 30% 10.92GB 30% 6.18GB 0B 0% 35.89GB 59% - 35.89GB 0B 0%

::*> volume show-footprint -volume vol1 -instance
Vserver: fsx
Volume Name: vol1
Volume MSID: 2154998956
Volume DSID: 1026
Vserver UUID: 9cb1f92a-94c7-11ee-a274-2d5895636ad1
Aggregate Name: aggr1
Aggregate UUID: f75e2e02-94c6-11ee-a274-2d5895636ad1
Hostname: FsxId04076278992c2097a-01
Tape Backup Metadata Footprint: -
Tape Backup Metadata Footprint Percent: -
Deduplication Footprint: 145.2MB
Deduplication Footprint Percent: 0%
Temporary Deduplication Footprint: -
Temporary Deduplication Footprint Percent: -
Cross Volume Deduplication Footprint: -
Cross Volume Deduplication Footprint Percent: -
Cross Volume Temporary Deduplication Footprint: -
Cross Volume Temporary Deduplication Footprint Percent: -
Volume Data Footprint: 25.07GB
Volume Data Footprint Percent: 3%
Flexible Volume Metadata Footprint: 214.9MB
Flexible Volume Metadata Footprint Percent: 0%
Delayed Free Blocks: 557.7MB
Delayed Free Blocks Percent: 0%
SnapMirror Destination Footprint: -
SnapMirror Destination Footprint Percent: -
Volume Guarantee: 0B
Volume Guarantee Percent: 0%
File Operation Metadata: 4KB
File Operation Metadata Percent: 0%
Total Footprint: 25.97GB
Total Footprint Percent: 3%
Containing Aggregate Size: 907.1GB
Name for bin0: Performance Tier
Volume Footprint for bin0: 25.62GB
Volume Footprint bin0 Percent: 100%
Name for bin1: FSxFabricpoolObjectStore
Volume Footprint for bin1: 0B
Volume Footprint bin1 Percent: 0%
Total Deduplication Footprint: 145.2MB
Total Deduplication Footprint Percent: 0%
Footprint Data Reduction by Auto Adaptive Compression: 564.7MB
Footprint Data Reduction by Auto Adaptive Compression Percent: 0%
Total Footprint Data Reduction: 564.7MB
Total Footprint Data Reduction Percent: 0%
Footprint Data Reduction by Capacity Tier: -
Footprint Data Reduction by Capacity Tier Percent: -
Effective Total after Footprint Data Reduction: 25.42GB
Effective Total after Footprint Data Reduction Percent: 3%
Footprint Data Reduction by Compaction: -
Footprint Data Reduction by Compaction Percent: -

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId04076278992c2097a-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 71.12GB
Total Physical Used: 25.27GB
Total Storage Efficiency Ratio: 2.81:1
Total Data Reduction Logical Used Without Snapshots: 35.20GB
Total Data Reduction Physical Used Without Snapshots: 25.16GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.40:1
Total Data Reduction Logical Used without snapshots and flexclones: 35.20GB
Total Data Reduction Physical Used without snapshots and flexclones: 25.16GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.40:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 71.81GB
Total Physical Used in FabricPool Performance Tier: 26.18GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.74:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.89GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 26.08GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.38:1
Logical Space Used for All Volumes: 35.20GB
Physical Space Used for All Volumes: 24.28GB
Space Saved by Volume Deduplication: 10.90GB
Space Saved by Volume Deduplication and pattern detection: 10.92GB
Volume Deduplication Savings ratio: 1.45:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 17.95MB
Volume Data Reduction SE Ratio: 1.45:1
Logical Space Used by the Aggregate: 26.11GB
Physical Space Used by the Aggregate: 25.27GB
Space Saved by Aggregate Data Reduction: 861.7MB
Aggregate Data Reduction SE Ratio: 1.03:1
Logical Size Used by Snapshot Copies: 35.92GB
Physical Size Used by Snapshot Copies: 110.0MB
Snapshot Volume Data Reduction Ratio: 334.35:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 334.35:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0
```
`Footprint Data Reduction by Auto Adaptive Compression` is now 564.7MB, which means a little over 200MB of additional data was compressed.
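As a rough cross-check of these counters, `Number of Compression Done Blocks` appears to count 4KB WAFL data blocks, so 296,240 blocks correspond to roughly 1.1GiB of logical data fed through the compressor, against a footprint reduction of 564.7MB. The 4KiB block size and treating the footprint figure as cumulative are my assumptions here; this is only a back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the Inactive data compression counters.
# Assumption: "Number of Compression Done Blocks" counts 4 KiB WAFL blocks.
BLOCK_SIZE = 4 * 1024  # bytes per WAFL data block

compression_done_blocks = 296_240
logical_bytes = compression_done_blocks * BLOCK_SIZE
print(f"logical data compressed: {logical_bytes / 1024**3:.2f} GiB")

# "Footprint Data Reduction by Auto Adaptive Compression" after the manual run
footprint_reduction_mb = 564.7
savings_fraction = footprint_reduction_mb / (logical_bytes / 1024**2)
print(f"approx. savings on those blocks: {savings_fraction:.0%}")
```

Under these assumptions the blocks that were compressed shrank by roughly half, which is consistent with TSSE's 32KB cold compression needing at least a 25% gain to keep the compressed form.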
The disk throughput, disk IOPS, and CPU utilization at the time the manual post-process compression ran are shown below.
Just as when post-process compression was run with `-scan-old-data`, every metric spikes sharply. Since this is effectively a full scan, it is bound to impose a fair amount of load.
Checking compaction
Next, let's check compaction. Or so I would like to say, but `Footprint Data Reduction by Compaction` in the `volume show-footprint` output was `-`, even though no writes or other operations were performed during this period.
`aggr show-efficiency` has no value showing the amount saved by compaction, either.
Given this, I think it is difficult to determine exactly how much data compaction has saved.
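If one assumes, hypothetically, that `Space Saved by Aggregate Data Reduction` covers all aggregate-level savings (TSSE compression, compaction, and anything else), then subtracting the compression portion gives at best an upper bound for compaction, not an exact figure. This attribution is my assumption, not something the output confirms:

```python
# Hypothetical upper bound for compaction savings, assuming
# aggregate data reduction = Auto Adaptive Compression + compaction (+ other).
aggregate_data_reduction_mb = 861.7   # "Space Saved by Aggregate Data Reduction"
auto_adaptive_compression_mb = 564.7  # "Footprint Data Reduction by Auto Adaptive Compression"

compaction_upper_bound_mb = aggregate_data_reduction_mb - auto_adaptive_compression_mb
print(f"compaction savings are at most ~{compaction_upper_bound_mb:.0f} MB under this assumption")
```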
Sorting out TSSE on Amazon FSx for NetApp ONTAP
In this post I sorted out how TSSE behaves on Amazon FSx for NetApp ONTAP.
There is very little information in publicly available documentation, so I verified the behavior through hands-on testing. It is quite a deep rabbit hole, which makes it rewarding to dig into.
If you want to reduce the amount of physical data stored, understanding how TSSE works is essential.
In particular, if you want post-process compression to take effect, do not forget to enable Inactive data compression after creating a volume.
I hope this article helps someone.
That was のんピ (@non____97) from the Consulting Department, AWS Business Division!
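For reference, the check-and-enable step might look something like the following in the ONTAP CLI. This is a sketch based on the commands used in this post; the `modify -is-enabled` form and the `-fields is-enabled` field name are my assumptions, so verify them against your ONTAP version before relying on them:

```
::> volume efficiency inactive-data-compression show -volume vol1 -fields is-enabled

::> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true
```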